Learning novel classes from only a very few labeled samples has attracted increasing attention in machine learning. Recent studies based on either meta-learning or transfer learning show that acquiring a good feature space can be an effective solution to achieve favorable performance on few-shot tasks. In this paper, we propose a simple but effective paradigm that decouples the tasks of learning feature representations and classifiers, and only learns the feature embedding architecture from base classes via the typical transfer-learning training strategy. To maintain both the generalization ability across base and novel classes and the discrimination ability within each class, we propose a dual-path feature learning scheme that effectively combines structural similarity with contrastive feature construction. In this way, intra-class alignment and inter-class uniformity can be well balanced, leading to improved performance. Experiments on three popular benchmarks show that, when combined with a simple prototype-based classifier, our method can still achieve promising results for both standard and generalized few-shot problems, in either an inductive or transductive inference setting.
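Below is a minimal sketch of the prototype-based classification step mentioned above, assuming a frozen embedding network trained on the base classes; tensor shapes and function names are illustrative, and the paper's dual-path feature learning scheme is not reproduced here.

```python
# Hedged sketch: nearest-prototype classification on top of fixed few-shot embeddings.
import torch
import torch.nn.functional as F

def prototype_classify(support_feats, support_labels, query_feats, n_way):
    """support_feats: [n_support, d], support_labels: [n_support], query_feats: [n_query, d]."""
    # L2-normalize so that cosine similarity drives the decision.
    support_feats = F.normalize(support_feats, dim=-1)
    query_feats = F.normalize(query_feats, dim=-1)
    # Class prototype = mean of the support embeddings belonging to that class.
    protos = torch.stack([support_feats[support_labels == c].mean(0) for c in range(n_way)])
    # Assign each query to its most similar prototype.
    logits = query_feats @ protos.t()            # [n_query, n_way]
    return logits.argmax(dim=-1)
```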
Conventional detection networks usually require abundant labeled training samples, whereas humans can learn new concepts incrementally from just a few examples. This paper focuses on the more challenging but realistic class-incremental few-shot object detection problem (iFSD). It aims to incrementally transfer the model to novel objects from only a few annotated samples, without catastrophically forgetting previously learned ones. To tackle this problem, we propose a new method that transfers with less forgetting, fewer training resources, and stronger transfer capability. Specifically, we first present a transfer strategy to reduce unnecessary weight adaptation and improve the transfer capability for iFSD. On this basis, we integrate the knowledge distillation technique with a less resource-consuming approach to alleviate forgetting, and propose a novel clustering-based exemplar selection process to preserve more discriminative features learned previously. As a generic and effective method, our approach can largely improve iFSD performance on various benchmarks.
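The sketch below illustrates one plausible form of the clustering-based exemplar selection mentioned above: cluster the stored features of a learned class and keep the sample closest to each cluster center. The function name, clustering choice, and budget are assumptions, not the paper's exact procedure.

```python
# Hedged sketch: clustering-based exemplar selection for rehearsal in incremental detection.
import numpy as np
from sklearn.cluster import KMeans

def select_exemplars(features: np.ndarray, n_exemplars: int) -> np.ndarray:
    """features: [n_samples, d] per-class feature vectors; returns indices of kept exemplars."""
    kmeans = KMeans(n_clusters=n_exemplars, n_init=10).fit(features)
    exemplar_ids = []
    for center in kmeans.cluster_centers_:
        # Keep the sample whose feature lies closest to this cluster center.
        exemplar_ids.append(int(np.linalg.norm(features - center, axis=1).argmin()))
    return np.array(exemplar_ids)
```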
Multivariate time series (MTS) forecasting plays an important role in the automation and optimization of intelligent applications. It is a challenging task because we need to consider both complex intra-variable dependencies and inter-variable dependencies. Existing works only learn temporal patterns with the help of a single inter-variable dependency graph. However, many real-world MTS exhibit multi-scale temporal patterns, and a single inter-variable dependency graph makes the model inclined to learn only one type of prominent, shared temporal pattern. In this paper, we propose a multi-scale adaptive graph neural network (MAGNN) to address these issues. MAGNN exploits a multi-scale pyramid network to preserve the underlying temporal dependencies at different time scales. Since inter-variable dependencies may differ across time scales, an adaptive graph learning module is designed to infer scale-specific inter-variable dependencies without pre-defined priors. Given the multi-scale feature representations and scale-specific inter-variable dependencies, a multi-scale temporal graph neural network is introduced to jointly model intra-variable and inter-variable dependencies. After that, we develop a scale-wise fusion module to effectively promote collaboration across different time scales and automatically capture the importance of the contributing temporal patterns. Experiments on four real-world datasets demonstrate that MAGNN outperforms state-of-the-art methods across various settings.
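As a rough illustration of the adaptive graph learning idea described above, the sketch below infers a scale-specific inter-variable adjacency matrix from learnable node embeddings instead of a pre-defined graph. Dimensions, parameterization, and normalization are assumptions for illustration, not MAGNN's exact formulation.

```python
# Hedged sketch: inferring scale-specific inter-variable dependencies from node embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphLearner(nn.Module):
    def __init__(self, n_vars: int, emb_dim: int, n_scales: int):
        super().__init__()
        # One pair of node-embedding tables per time scale.
        self.src = nn.Parameter(torch.randn(n_scales, n_vars, emb_dim))
        self.dst = nn.Parameter(torch.randn(n_scales, n_vars, emb_dim))

    def forward(self, scale: int) -> torch.Tensor:
        # Pairwise affinities between variables at this scale.
        scores = torch.relu(self.src[scale] @ self.dst[scale].t())
        # Row-normalize into a dense adjacency matrix.
        return F.softmax(scores, dim=-1)

# Example: adj = AdaptiveGraphLearner(n_vars=137, emb_dim=16, n_scales=4)(scale=0)
```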
Multivariate time series (MTS) forecasting has attracted much attention in many intelligent applications. It is not a trivial task, as we need to consider both intra-variable dependencies and inter-variable dependencies. However, existing works are designed for specific scenarios and require much domain knowledge and expert effort, which is difficult to transfer across scenarios. In this paper, we propose a scale-aware neural architecture search framework for MTS forecasting (SNAS4MTF). A multi-scale decomposition module transforms the raw time series into multi-scale sub-series, which preserves multi-scale temporal patterns. An adaptive graph learning module infers the different inter-variable dependencies under different time scales without any prior knowledge. For MTS forecasting, a search space is designed to capture both intra-variable and inter-variable dependencies at each time scale. The multi-scale decomposition, adaptive graph learning, and neural architecture search modules are jointly learned in an end-to-end framework. Extensive experiments on two real-world datasets show that SNAS4MTF achieves promising performance compared with state-of-the-art methods.
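The following sketch shows one simple way a multi-scale decomposition step could work: the raw series is down-sampled into progressively coarser sub-series, one per scale. Average pooling is an illustrative choice only; SNAS4MTF's actual decomposition may differ.

```python
# Hedged sketch: decomposing a raw multivariate series into multi-scale sub-series.
import torch
import torch.nn.functional as F

def multi_scale_decompose(x: torch.Tensor, n_scales: int, kernel: int = 2):
    """x: [batch, n_vars, length]; returns a list of sub-series, finest scale first."""
    scales = [x]
    for _ in range(n_scales - 1):
        x = F.avg_pool1d(x, kernel_size=kernel, stride=kernel)  # coarser temporal resolution
        scales.append(x)
    return scales
```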
Many applications require collecting data on different variables or measurements over many system performance metrics; we broadly refer to these variables or measurements as measures. Collecting data along each measure typically incurs a cost, so it is desirable to take measure costs into account in modeling. This is a fairly new problem in the area of cost-sensitive learning. There have been attempts to incorporate costs when combining and selecting measures. However, existing studies either do not strictly enforce a budget constraint, or are not the most cost-effective. Focusing on classification problems, we propose a computationally efficient approach that finds near-optimal models under a given budget by exploring only the most "promising" part of the solution space. Instead of outputting a single model, we generate a model schedule: a list of models ranked by model cost and expected predictive accuracy. It can be used to choose the model with the best predictive accuracy under a given budget, or to trade off between budget and predictive accuracy. Experiments on several benchmark datasets show that our approach compares favorably to competing methods.
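A small sketch of the model-schedule idea follows: rank candidate models by cost, keep only those that improve expected accuracy, and pick the best affordable one. The candidate tuples and selection rule are placeholders, not the paper's search procedure.

```python
# Hedged sketch: building and querying a cost/accuracy model schedule under a budget.
def build_schedule(candidates):
    """candidates: list of (cost, expected_accuracy, model) tuples."""
    schedule, best_acc = [], float("-inf")
    for cost, acc, model in sorted(candidates, key=lambda c: c[0]):
        if acc > best_acc:              # keep only entries that improve accuracy for their cost
            schedule.append((cost, acc, model))
            best_acc = acc
    return schedule

def pick_model(schedule, budget):
    """Return the most accurate schedule entry whose cost fits the budget, if any."""
    affordable = [entry for entry in schedule if entry[0] <= budget]
    return max(affordable, key=lambda e: e[1]) if affordable else None
```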
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT shows strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
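Below is a rough, hedged sketch of the cross-modal token idea: image tokens and point-cloud tokens, each assumed to already carry 3D position information, are concatenated into a single memory attended to by object queries that predict boxes. All module choices, dimensions, and the box parameterization are illustrative assumptions, not the released CMT implementation.

```python
# Hedged sketch: a tiny query-based decoder over concatenated image and point-cloud tokens.
import torch
import torch.nn as nn

class TinyCrossModalDecoder(nn.Module):
    def __init__(self, d_model=256, n_queries=900):
        super().__init__()
        self.queries = nn.Embedding(n_queries, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.box_head = nn.Linear(d_model, 10)   # e.g. center, size, yaw, velocity (assumed)

    def forward(self, img_tokens, pts_tokens):
        # img_tokens: [B, N_img, d], pts_tokens: [B, N_pts, d]; both already position-encoded.
        memory = torch.cat([img_tokens, pts_tokens], dim=1)
        q = self.queries.weight.unsqueeze(0).expand(img_tokens.size(0), -1, -1)
        return self.box_head(self.decoder(q, memory))   # [B, n_queries, 10]
```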
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly-added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at this task, covering a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
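The sketch below illustrates one plausible form of the feature-level reference step described above: support masks pool support features into a dynamic class center, which is then used to re-weight the query features. Tensor shapes and the particular weighting rule are assumptions for illustration, not RefT's exact module.

```python
# Hedged sketch: mask-pooled class centers used as a simple attention map over query features.
import torch
import torch.nn.functional as F

def mask_pool_center(support_feats, support_masks):
    """support_feats: [K, C, H, W], support_masks: [K, 1, H, W] in {0, 1}."""
    masked = support_feats * support_masks
    centers = masked.sum(dim=(2, 3)) / support_masks.sum(dim=(2, 3)).clamp(min=1e-6)
    return centers.mean(dim=0)                           # [C] dynamic class center

def reweight_query(query_feats, class_center):
    """query_feats: [C, H, W]; weight each location by its cosine similarity to the center."""
    q = F.normalize(query_feats.flatten(1), dim=0)        # [C, H*W], unit vector per location
    c = F.normalize(class_center, dim=0).unsqueeze(1)     # [C, 1]
    attn = (q * c).sum(dim=0).clamp(min=0)                # [H*W] non-negative similarities
    return query_feats * attn.view(1, *query_feats.shape[1:])
```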
Graph Neural Networks (GNNs) have shown satisfactory performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle this problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
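For context, the sketch below shows a generic distillation objective augmented with a fairness regularizer: a standard soft-target KD term plus a demographic-parity style penalty on the student's predictions. The penalty is an illustrative stand-in and is not RELIANT's actual debiasing mechanism, which the abstract does not detail.

```python
# Hedged sketch: knowledge distillation loss with an illustrative fairness penalty
# (assumes a binary task and that both sensitive groups are present in the batch).
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, sensitive, T=2.0, alpha=0.5, beta=0.1):
    task = F.cross_entropy(student_logits, labels)
    # Soft-target distillation term with temperature T.
    distill = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                       F.softmax(teacher_logits / T, dim=-1),
                       reduction="batchmean") * T * T
    # Penalize the gap in mean positive-class probability between the two sensitive groups.
    probs = F.softmax(student_logits, dim=-1)[:, 1]
    fairness = (probs[sensitive == 0].mean() - probs[sensitive == 1].mean()).abs()
    return task + alpha * distill + beta * fairness
```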
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy against constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
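As a rough illustration of the kind of block described above, the sketch below mixes a depth-wise convolution (local, CNN-like modeling) with lightweight self-attention (long-distance interactions) inside an inverted-residual expand/project structure. The expansion ratio, ordering, and attention choice are assumptions, not the exact iRMB design.

```python
# Hedged sketch: an inverted-residual style block combining depth-wise convolution and attention.
import torch
import torch.nn as nn

class IRMBSketch(nn.Module):
    def __init__(self, dim: int, expand: int = 4, heads: int = 4):
        super().__init__()
        hidden = dim * expand
        self.expand = nn.Conv2d(dim, hidden, 1)                               # expand channels
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)      # depth-wise conv
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)    # global mixing
        self.project = nn.Conv2d(hidden, dim, 1)                              # project back
        self.act = nn.GELU()

    def forward(self, x):                       # x: [B, dim, H, W]
        b, _, h, w = x.shape
        y = self.act(self.expand(x))
        y = self.act(self.dw(y))
        tokens = y.flatten(2).transpose(1, 2)   # [B, H*W, hidden]
        tokens, _ = self.attn(tokens, tokens, tokens)
        y = y + tokens.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.project(y)              # inverted residual: expand -> mix -> project
```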